Training Decision Trees as Replacement for Convolution Layers

Related resources

Finding Influential Training Samples for Gradient Boosted Decision Trees

We address the problem of finding influential training samples for a particular case of tree ensemble-based models, e.g., Random Forest (RF) or Gradient Boosted Decision Trees (GBDT). A natural way of formalizing this problem is studying how the model’s predictions change upon leave-one-out retraining, leaving out each individual training sample. Recent work has shown that, for parametric model...
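
The leave-one-out formalization in this abstract can be made concrete with a brute-force sketch: retrain the ensemble once per held-out training point and measure how the test predictions move. The scikit-learn model and synthetic data below are illustrative assumptions, not the paper's (far more efficient) method.

```python
# Brute-force leave-one-out influence for a GBDT, illustrating the
# formalization in the abstract (not the paper's efficient estimator).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=200, random_state=0)
X_train, y_train = X[:150], y[:150]
X_test = X[150:]

base = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
base_pred = base.predict_proba(X_test)[:, 1]

influence = np.empty(len(X_train))
for i in range(len(X_train)):
    mask = np.arange(len(X_train)) != i  # drop training sample i
    m = GradientBoostingClassifier(random_state=0).fit(X_train[mask], y_train[mask])
    # Influence of sample i = how much the test predictions move on retraining.
    influence[i] = np.abs(m.predict_proba(X_test)[:, 1] - base_pred).mean()

print(influence.argsort()[-5:])  # indices of the five most influential samples
```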

Decision trees as possibilistic classifiers

This paper addresses the classification problem with imperfect data. More precisely, it extends standard decision trees to handle uncertainty in both building and classification procedures. Uncertainty here is represented by means of possibility distributions. The first part investigates the issue of building decision trees from data with uncertain class values by developing a non-specificity b...
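
As a rough illustration of the kind of measure a non-specificity-based criterion builds on, here is the classical U-uncertainty of a possibility distribution (Higashi–Klir); whether the paper uses exactly this measure inside its splitting criterion is an assumption.

```python
# Non-specificity (U-uncertainty) of a normalized possibility distribution,
# the kind of quantity a "non-specificity based" split criterion builds on.
# Assumption: the paper's exact criterion may differ in detail.
import numpy as np

def non_specificity(pi):
    """U(pi) = sum_i (pi_(i) - pi_(i+1)) * log2(i), pi sorted descending."""
    p = np.sort(np.asarray(pi, dtype=float))[::-1]
    p = np.append(p, 0.0)          # pi_(n+1) = 0 by convention
    ranks = np.arange(1, len(p))   # i = 1..n
    return float(np.sum((p[:-1] - p[1:]) * np.log2(ranks)))

print(non_specificity([1.0, 0.0, 0.0]))  # fully specific class -> 0.0
print(non_specificity([1.0, 1.0, 1.0]))  # total ignorance -> log2(3) ~ 1.585
```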

Cluster adaptive training with factorized decision trees for speech recognition

Cluster adaptive training (CAT) is a popular approach to train multiple-cluster HMMs for fast speaker adaptation in speech recognition. Traditionally, a cluster-independent decision tree is shared among all clusters, which could limit the modelling power of multiple-cluster HMMs. In this paper, each cluster is allowed to have its own decision tree. The intersections between the triphones subset...
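
The CAT model underlying this work represents each adapted Gaussian mean as a weighted combination of cluster means. Below is a minimal sketch of that interpolation with illustrative numbers; the paper's actual contribution, per-cluster decision trees, is not shown.

```python
# Core CAT idea: an adapted HMM Gaussian mean is an interpolation of
# per-cluster means with speaker-specific weights (values illustrative).
import numpy as np

M = np.array([[1.0, 4.0],    # cluster means, one column per cluster
              [2.0, 0.0],
              [3.0, 2.0]])   # feature dim 3, P = 2 clusters
lam = np.array([0.7, 0.3])   # speaker weight vector, one weight per cluster

mu_speaker = M @ lam         # adapted mean mu(s) = M * lambda(s)
print(mu_speaker)            # -> [1.9 1.4 2.7]
```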

Decision Trees for Decision Making

If the company builds a big plant, it must live with it whatever the size of market demand. If it builds a small plant, management has the option of expanding the plant in two years in the event that demand is high during the introductory period; while in the event that demand is low during the introductory period, the company will maintain operations in the small plant and make a tidy profit o...
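
The excerpt describes a classic expected-monetary-value comparison down the two branches of the decision tree. A toy sketch of that comparison follows; all payoffs and probabilities are entirely hypothetical.

```python
# Expected-monetary-value comparison for the plant-sizing decision in the
# excerpt. All payoffs and probabilities below are hypothetical.
p_high = 0.6                  # assumed probability of high initial demand

# Big plant: the company is committed regardless of demand.
emv_big = p_high * 10.0 + (1 - p_high) * (-2.0)

# Small plant: expand in two years if demand proves high, else stay small.
emv_expand_later = 6.0        # assumed payoff of expanding after high demand
emv_stay_small = 2.0          # the "tidy profit" branch under low demand
emv_small = p_high * emv_expand_later + (1 - p_high) * emv_stay_small

print(f"big plant:   {emv_big:.1f}")    # 0.6*10 - 0.4*2 = 5.2
print(f"small plant: {emv_small:.1f}")  # 0.6*6  + 0.4*2 = 4.4
```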

A Kronecker-factored approximate Fisher matrix for convolution layers

Second-order optimization methods such as natural gradient descent have the potential to speed up training of neural networks by correcting for the curvature of the loss function. Unfortunately, the exact natural gradient is impractical to compute for large models, and most approximations either require an expensive iterative procedure or make crude approximations to the curvature. We present K...
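
The computational idea behind Kronecker-factored curvature approximations is that a Fisher block approximated as A ⊗ G can be inverted via its small factors alone. The numpy check below verifies that identity; it is a sketch of the principle, not the paper's KFC approximation for convolution layers.

```python
# The Kronecker trick behind K-FAC-style methods: if a Fisher block
# factorizes as F ~ A (x) G, then F^{-1} vec(V) = vec(G^{-1} V A^{-1})
# (column-major vec), so only the small factors are ever inverted.
import numpy as np

rng = np.random.default_rng(0)

def spd(n):  # random symmetric positive-definite matrix
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

A, G = spd(4), spd(3)             # input-side and output-side factors
V = rng.standard_normal((3, 4))   # a gradient reshaped as a matrix

# Direct solve against the full Kronecker product (infeasible at scale).
direct = np.linalg.solve(np.kron(A, G), V.flatten(order="F"))

# Factored solve: two small inverses instead of one huge one.
factored = (np.linalg.solve(G, V) @ np.linalg.inv(A)).flatten(order="F")

print(np.allclose(direct, factored))  # -> True
```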


Journal

Journal title: Proceedings of the AAAI Conference on Artificial Intelligence

Year: 2020

ISSN: 2374-3468, 2159-5399

DOI: 10.1609/aaai.v34i04.5801